Unsupervised learning with stochastic gradient

Authors
Abstract


Similar articles

Unsupervised learning with stochastic gradient

A stochastic gradient is formulated based on a deterministic gradient augmented with Cauchy simulated annealing, capable of reaching a global minimum with a convergence speed significantly faster than when simulated annealing is used alone. In order to solve space-time variant inverse problems known as blind source separation, a novel Helmholtz free energy contrast function, H = E − T0S, with imposed ther...

Full text
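
The excerpt above describes a deterministic gradient step augmented with Cauchy (fast) simulated annealing. The sketch below illustrates only that general idea; the test objective, the 1/k temperature schedule, the step size, and the function name are assumptions made for the example, not the paper's formulation or its Helmholtz free energy contrast.

```python
import numpy as np

# Hypothetical sketch of a gradient step augmented with Cauchy ("fast")
# simulated annealing noise. The objective, temperature schedule T_k = T0/k,
# step size, and iteration count are placeholder choices for illustration.
def cauchy_annealed_descent(grad, x0, steps=1000, lr=0.01, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        temp = t0 / k                                      # annealing temperature shrinks over time
        noise = temp * rng.standard_cauchy(size=x.shape)   # heavy-tailed kick to escape local minima
        x = x - lr * grad(x) + noise                       # deterministic gradient + annealed noise
    return x

# Example: gradient of the multimodal test function f(x) = x**2 + sin(5x)**2,
# whose global minimum is at x = 0.
grad = lambda x: 2 * x + 5 * np.sin(10 * x)
# Plain gradient descent from x0 = 3 stalls in a local minimum near x ≈ 2.4;
# the annealed run usually finishes much closer to the global minimum at x = 0.
print(cauchy_annealed_descent(grad, x0=np.array([3.0])))
```

The heavy Cauchy tails occasionally throw the iterate across basins, which is what allows escapes from local minima, while the shrinking temperature makes late iterations behave like ordinary gradient descent.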

Generalized Stochastic Gradient Learning

We study the properties of the generalized stochastic gradient (GSG) learning in forward-looking models. GSG algorithms are a natural and convenient way to model learning when agents allow for parameter drift or robustness to parameter uncertainty in their beliefs. The conditions for convergence of GSG learning to a rational expectations equilibrium are distinct from but related to the well-kno...

Full text

Generalized Stochastic Gradient Learning (www.econstor.eu)

We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement and we show that there is a transformation of varia...

Full text
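
The two GSG entries above concern the same family of update rules: a stochastic gradient step premultiplied by a fixed positive-definite weighting matrix, with standard SG recovered when that matrix is the identity. The snippet below is a hypothetical sketch of those mechanics on a simple linear forecasting rule; the data-generating process, the constant gain, and the weighting matrix are invented for the illustration and are not taken from either paper.

```python
import numpy as np

# Hypothetical illustration of a generalized stochastic gradient (GSG) update
# for a linear forecasting rule y_t ≈ z_t' phi. Standard SG corresponds to
# Gamma = I; GSG premultiplies the update by a positive-definite matrix Gamma.
def gsg_learning(Z, Y, Gamma, gain=0.01):
    phi = np.zeros(Z.shape[1])
    for z, y in zip(Z, Y):
        error = y - z @ phi                        # forecast error under current beliefs
        phi = phi + gain * (Gamma @ z) * error     # constant-gain GSG step (SG when Gamma = I);
                                                   # decreasing-gain versions replace gain with gain/t
    return phi

rng = np.random.default_rng(0)
Z = np.column_stack([np.ones(5000), rng.normal(size=5000)])   # constant plus an observable shock
true_phi = np.array([1.0, 0.5])
Y = Z @ true_phi + 0.1 * rng.normal(size=5000)
print(gsg_learning(Z, Y, Gamma=np.diag([1.0, 2.0])))          # roughly recovers [1.0, 0.5]
```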

Stochastic Unsupervised Learning on Unlabeled Data

In this paper, we introduce a stochastic unsupervised learning method that was used in the 2011 Unsupervised and Transfer Learning (UTL) challenge. This method is developed to preprocess the data that will be used in the subsequent classification problems. Specifically, it performs K-means clustering on principal components instead of raw data to remove the impact of noisy/irrelevant/less-relev...

Full text
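
The preprocessing idea described above, K-means on principal components rather than on raw data, can be sketched directly with scikit-learn; the synthetic data and the choices of three components and three clusters below are illustrative assumptions, not the challenge entry's actual settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Sketch of the preprocessing idea: run K-means on the leading principal
# components instead of the raw features, so low-variance (noisy or less
# relevant) directions do not drive the cluster assignments.
rng = np.random.default_rng(0)
signal = 5 * np.repeat(np.eye(3), 100, axis=0)             # 3 well-separated groups of 100 points
X = np.hstack([signal + rng.normal(size=signal.shape),     # informative features
               rng.normal(size=(300, 50))])                # 50 purely noisy features

Z = PCA(n_components=3).fit_transform(X)                   # keep only the leading components
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))                                 # roughly 100 points per cluster
```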

Online Learning, Stability, and Stochastic Gradient Descent

In batch learning, stability together with existence and uniqueness of the solution corresponds to well-posedness of Empirical Risk Minimization (ERM) methods; recently, it was proved that CVloo stability is necessary and sufficient for generalization and consistency of ERM ([9]). In this note, we introduce CVon stability, which plays a similar role in online learning. We show that stochastic g...

Full text
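
For reference alongside the stability discussion above, here is a minimal one-pass (online) stochastic gradient descent on squared loss with a decreasing step size; the step-size schedule and the synthetic data are illustrative assumptions and are not taken from the note itself.

```python
import numpy as np

# Minimal one-pass (online) stochastic gradient descent on squared loss with a
# decreasing step size, illustrating the kind of algorithm the stability
# results concern, not their specific setting.
def online_sgd(X, y, lr0=0.1):
    w = np.zeros(X.shape[1])
    for t, (x_t, y_t) in enumerate(zip(X, y), start=1):
        grad = (x_t @ w - y_t) * x_t              # gradient of 0.5*(x'w - y)^2 at one example
        w = w - (lr0 / np.sqrt(t)) * grad         # step size decays like 1/sqrt(t)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=2000)
print(online_sgd(X, y))                           # approximately recovers w_true
```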


Journal

Journal title: Neurocomputing

Year: 2005

ISSN: 0925-2312

DOI: 10.1016/j.neucom.2004.11.010